Facebook remained in Nancy Pelosi purgatory this week, as the Speaker of the House slammed Facebook for refusing to remove the doctored video that made her look drunk. Facebook held its ground.
As you can imagine, the chattering types in Silicon Valley have been busy spewing their opinions. There are essentially two opposing viewpoints: (1) free speech at all costs requires Facebook to keep the video up, or (2) Facebook is a service that writes its own rules for what it wants on that service—aka its terms of service, which already police all kinds of content—and it’s shameful that it decided to keep this video up.
It won’t surprise you that I find problems with both arguments, although I can think of one solution that I am surprised hasn’t been proposed. I’ll get to that.
But first, there is something that is driving me crazy about Facebook’s reaction that deserves more attention: the fact that the company continues to distinguish between “keeping up” fake content and actively “distributing it.” That’s a fake distinction that they should give up.
Here’s a refresher on Facebook’s current fake news playbook—which is how they treated the doctored Nancy Pelosi video. If third-party fact checkers label something as fake, Facebook puts a vague note on it saying there has been “additional reporting.” They reduce its distribution, meaning they cut back the number of people who see it in their News Feeds. By how much? That’s not clear, but in the past, the company has said by an average of 80%.
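To put rough numbers on that demotion, here is a toy sketch of how such a dial might work. This is not Facebook’s actual ranking system, which isn’t public; the function name, the factor derived from the “80%” figure and the reach numbers are purely illustrative.

```python
# Toy sketch, not Facebook's real ranking code (which isn't public): how a
# fact-check label could translate into reduced News Feed reach.

DEMOTION_FACTOR = 0.2  # assumed from "reduce distribution by an average of 80%"

def estimated_reach(base_reach: int, flagged_as_false: bool) -> int:
    """Estimate how many people might see a post in News Feed.

    base_reach: impressions the post would otherwise get.
    flagged_as_false: whether third-party fact checkers labeled it fake.
    """
    if flagged_as_false:
        # The post stays up, but the distribution dial is turned down.
        return int(base_reach * DEMOTION_FACTOR)
    return base_reach

print(estimated_reach(1_000_000, flagged_as_false=True))   # 200000
print(estimated_reach(1_000_000, flagged_as_false=False))  # 1000000
```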
Facebook keeps the content up because they don’t want to be censoring speech, even claims that third parties find objectively false. But they don’t want to spread those claims either, because they know users don’t want to see fake news. And Section 230 of the Communications Decency Act protects Facebook from being held liable for what others post.
This strategy doesn’t make sense to me. Distribution is a gradient with no bright lines. On Facebook, 10 people or 100 people or 10 million people could see something depending on Facebook’s News Feed algorithm and how people react to the content. Facebook isn’t more or less responsible for something based on the number of people who see it.
The idea that the company is upholding the First Amendment by leaving it up but trying to hide it from the world (a type of censorship) seems problematic too. And if the company really doesn’t want something on Facebook—which is a position the company takes in many other circumstances related to nudity, hate speech and more—then they should take it off. That’s what YouTube did in the case of the Pelosi video.
No one who works at Facebook will give me an on-the-record reason why they try to split the difference. Is it to placate conservative critics who would be furious if the fake video were removed? Is it because the company’s leadership feels very uncomfortable with being used to spread fake news and thinks this strategy is a clever compromise?
Smart people, like those who work at Facebook, can come up with clever ideas and can tolerate a lot of complexity. But this one just doesn’t pass the smell test. If you want to take a principled stand on whether something should be on your service, do so. But don’t pretend that tuning a black box dial up or down a little allows you to have it both ways.
Facebook’s leaders want us to believe that there are different types of distribution on Facebook. You can send or see something in a private message, a group with 100 people or in your News Feed. Mark Zuckerberg is betting the future of the company on the idea that what people share in messaging should be treated differently than what they see in News Feed, encrypting the former and likely submitting to more regulation on the latter.
Encrypting messaging will draw a clear line: Facebook won’t be able to police your messages even if they want to, meaning fake content will proliferate. But, as people who work at Facebook like to point out, it also proliferates in the private conversations real people have with each other all the time. On that, they have a point.
But as for the non-messaging parts of the service, I don’t see clear differences. At the end of the day, people are using Facebook to share content and people are seeing it. Setting different rules based on how, where or by how many people it is seen seems beside the point.
So, back to my idea: what should the policy be on fakes like the Nancy Pelosi video?
I find myself looking to how platforms police copyrighted material: when they are alerted to it, they have to take it down. This is the law under the Digital Millennium Copyright Act of 1998, and it works pretty well.
There are obviously major problems with a system where people can flag anything as fake and require Facebook to take it down. We don’t want that. But what about a system where individuals can force Facebook and other internet companies to remove information they can prove is fake about themselves?
The idea raises hairy questions, such as how much of the source material around a fake has to be related to someone to qualify as theirs. What if a fake video uses my face but not my voice, for example? Can I petition to take it down?
I don’t have easy answers to that. Some lines would have to be drawn. But I think we must start to address these issues as we enter a world where fakes are going to be easier and easier to make.
Thanks to image and voice manipulation technology, many more people are going to find themselves Nancy Pelosied. Facebook and politicians should come up with a better solution than “we’ll leave the videos up and hope no one sees them.”